Review for NeurIPS paper: Joint Policy Search for Multi-agent Collaboration with Imperfect Information

Neural Information Processing Systems

Additional Feedback: Questions/Comments
- There is a slight inconsistency between Equations (1) and (3): in (1) you have A(I(h)), while in (3) you have A(h).
- Line 142: What is meant by the notation with a bar over the v? I don't see this introduced anywhere. This is a bit confusing, since your main theorem involves the difference between two overbar-v quantities. It seems like this might be the value of the root node under the policy, but that is not explicitly stated anywhere.
- It looks like you use the CFR1k strategy as a starting point for JPS. Do you experiment with using the other strategies (BAD and A2C) as starting points?


Review for NeurIPS paper: Joint Policy Search for Multi-agent Collaboration with Imperfect Information

Neural Information Processing Systems

This paper presents the concept of policy-change density for collaborative imperfect-information games. All the reviewers agree that the idea is novel and appreciate the results in small games and in a much larger game, bridge (in particular, a comparison vs. WBridge5). The reviewers agree that the problems identified are minor enough to be addressed in the final copy. As noted, there are problems with the comparison to WBridge5, and the authors have agreed to change their claim as a result. Clarifications on the connections to CFR and subgame decomposition should also be made.


Joint Policy Search for Multi-agent Collaboration with Imperfect Information

Neural Information Processing Systems

Learning good joint policies for multi-agent collaboration with imperfect information remains a fundamental challenge. Directly modeling joint policy changes in imperfect-information games is nontrivial due to the complicated interplay of policies (e.g., upstream updates affect downstream state reachability). In this paper, we show that global changes of game values can be decomposed into policy changes localized at each information set, via a novel term named policy-change density. Based on this, we propose Joint Policy Search (JPS), which iteratively improves joint policies of collaborative agents in imperfect-information games without re-evaluating the entire game. On multiple collaborative tabular games, JPS is proven to never worsen performance and can improve solutions provided by unilateral approaches (e.g., CFR), outperforming algorithms designed for collaborative policy learning (e.g., BAD). Furthermore, for real-world games whose states are too many to enumerate, JPS has an online form that naturally links with gradient updates.


Joint Policy Search for Multi-agent Collaboration with Imperfect Information

Tian, Yuandong, Gong, Qucheng, Jiang, Tina

arXiv.org Artificial Intelligence

Learning good joint policies for multi-agent collaboration with imperfect information remains a fundamental challenge. While for two-player zero-sum games coordinate-ascent approaches (optimizing one agent's policy at a time, e.g., self-play) work with guarantees, in the multi-agent cooperative setting they often converge to a sub-optimal Nash equilibrium. On the other hand, directly modeling joint policy changes in imperfect-information games is nontrivial due to the complicated interplay of policies (e.g., upstream updates affect downstream state reachability). In this paper, we show that global changes of game values can be decomposed into policy changes localized at each information set, via a novel term named policy-change density. Based on this, we propose Joint Policy Search (JPS), which iteratively improves joint policies of collaborative agents in imperfect-information games without re-evaluating the entire game. On multi-agent collaborative tabular games, JPS is proven to never worsen performance and can improve solutions provided by unilateral approaches (e.g., CFR), outperforming algorithms designed for collaborative policy learning (e.g., BAD). Furthermore, for real-world games, JPS has an online form that naturally links with gradient updates. We test it on Contract Bridge, a 4-player imperfect-information game where a team of 2 collaborates to compete against the other. In its bidding phase, players bid in turn to find a good contract through a limited information channel. Based on a strong baseline agent that bids competitive bridge purely through domain-agnostic self-play, JPS improves collaboration of team players and outperforms WBridge5, a championship-winning software, by +0.63 IMPs (International Matching Points) per board over 1k games, substantially better than the previous SoTA (+0.41 IMPs/b) under Double-Dummy evaluation.
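To make the abstract's description concrete, here is a minimal sketch of the kind of search loop it describes: iteratively propose a policy change localized at one information set and keep it only if the joint game value does not drop (matching the "never worsen performance" guarantee on tabular games). This is not the paper's algorithm or API; the game interface (`info_sets`, `actions`, `joint_value`) and the hill-climbing acceptance rule are hypothetical illustrations.

```python
import random

def joint_policy_search(game, policy, iters=100, seed=0):
    """Hedged sketch of iterative joint-policy improvement on a tabular game.

    `game` is a hypothetical interface (not from the paper) exposing:
      - game.info_sets()        -> list of information-set ids
      - game.actions(infoset)   -> list of actions available there
      - game.joint_value(policy)-> scalar value of the joint policy
    `policy` maps each information set to a chosen action.
    """
    rng = random.Random(seed)
    best = game.joint_value(policy)  # value of the current joint policy
    for _ in range(iters):
        infoset = rng.choice(game.info_sets())          # pick a local site
        old_action = policy[infoset]
        policy[infoset] = rng.choice(game.actions(infoset))  # local change
        value = game.joint_value(policy)
        if value >= best:            # accept: the joint value never worsens
            best = value
        else:
            policy[infoset] = old_action                # revert the change
    return policy, best
```

Because every candidate change is accepted only when the joint value does not decrease, the returned value is at least that of the starting policy; the actual method in the paper evaluates localized changes via the policy-change density rather than re-evaluating the entire game as this toy loop does.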